Developing and least developed countries face the dire challenge of ensuring that each child in their country receives the required doses of vaccines, adequate nutrition, and proper medication. International agencies such as UNICEF, WHO, and WFP, among other organizations, strive to find innovative solutions to determine which children have received these benefits and which have not. Biometric recognition systems have been explored as a way to address this problem. To that end, this report establishes a baseline accuracy of a commercial contactless palmprint recognition system that may be deployed for recognizing children in the age group of one to five years old. On a database of contactless palmprint images of one thousand unique palms from 500 children, we establish SOTA authentication accuracy of 90.85% @ FAR of 0.01%, rank-1 identification accuracy of 99.0% (closed set), and FPIR = 0.01 @ FNIR = 0.3 for open-set identification using the PalmMobile SDK from Armatura.
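As an illustrative aside (not part of the evaluation above), the following sketch shows how verification and closed-set identification metrics of the kind quoted — TAR at a fixed FAR and rank-1 accuracy — are typically computed from a matcher's similarity scores. The score distributions and function names are hypothetical stand-ins for the SDK's output.

```python
# Minimal sketch, assuming genuine/impostor similarity scores are available from the matcher.
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, target_far=1e-4):
    """True Accept Rate at a fixed False Accept Rate (e.g., FAR = 0.01%)."""
    threshold = np.quantile(impostor_scores, 1.0 - target_far)  # impostors above this are false accepts
    return np.mean(genuine_scores >= threshold), threshold

def rank1_accuracy(score_matrix, probe_labels, gallery_labels):
    """Closed-set rank-1: fraction of probes whose top gallery match has the correct identity."""
    top_match = np.asarray(gallery_labels)[np.argmax(score_matrix, axis=1)]
    return np.mean(top_match == np.asarray(probe_labels))

# Example with random scores standing in for real matcher output
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 5000)
impostor = rng.normal(0.3, 0.1, 500000)
tar, thr = tar_at_far(genuine, impostor, target_far=1e-4)
print(f"TAR @ FAR=0.01%: {tar:.4f} (threshold {thr:.3f})")
```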
The use of vision transformers (ViT) in computer vision is increasing due to their limited inductive biases (e.g., locality, weight sharing, etc.) and increased scalability compared to other deep learning methods. This has led to some initial studies on the use of ViT for biometric recognition, including fingerprint recognition. In this work, we improve on these initial studies for transformers in fingerprint recognition by (i) evaluating additional attention-based architectures, (ii) scaling to larger and more diverse training and evaluation datasets, and (iii) combining the complementary representations of attention-based and CNN-based embeddings for improved state-of-the-art (SOTA) fingerprint recognition (both authentication and identification). Our combined architecture, AFR-Net (Attention-Driven Fingerprint Recognition Network), outperforms several baseline transformer and CNN-based models, including a SOTA commercial fingerprint system, Verifinger v12.3, across intra-sensor, cross-sensor, and latent-to-rolled fingerprint matching datasets. Additionally, we propose a realignment strategy that uses local embeddings extracted from intermediate feature maps within the networks to refine the global embeddings in low-certainty situations, which significantly boosts the overall recognition accuracy of each of the models. This realignment strategy requires no additional training and can be applied as a wrapper to any existing deep learning network (attention-based, CNN-based, or both) to boost its performance.
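The sketch below illustrates the general idea of refining low-certainty global matches with local embeddings; the thresholds, the greedy local-correspondence step, and the score weighting are illustrative assumptions, not the paper's exact realignment procedure.

```python
# Minimal sketch: global embeddings decide clear-cut comparisons; borderline scores are
# refined with local embeddings pooled from intermediate feature maps.
import torch
import torch.nn.functional as F

def match_score(global_a, global_b, locals_a, locals_b, low=0.3, high=0.6, alpha=0.5):
    """global_*: (D,) embeddings; locals_*: (N, D) local embeddings from feature maps."""
    s_global = F.cosine_similarity(global_a, global_b, dim=0)
    if low < s_global < high:  # low-certainty band: refine with local evidence
        sims = F.normalize(locals_a, dim=1) @ F.normalize(locals_b, dim=1).T  # (N, N)
        s_local = sims.max(dim=1).values.mean()  # greedy best match per local patch
        return alpha * s_global + (1 - alpha) * s_local
    return s_global
```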
Deep neural networks (DNNs) have shown incredible promise in learning fixed-length representations of fingerprints. Since representation learning is often focused on capturing specific prior knowledge (e.g., minutiae), there is no universal representation that comprehensively encapsulates all of the discriminatory information in a fingerprint. Learning an ensemble of representations can mitigate this problem, but two critical challenges need to be addressed: (i) how can multiple diverse representations be extracted from the same fingerprint image, and (ii) how can these representations be optimally exploited during matching? In this work, we train multiple instances of DeepPrint (a DNN-based fingerprint encoder) on different transformations of the input image to generate an ensemble of fingerprint embeddings. We also propose a feature fusion technique that distills these multiple representations into a single embedding, which faithfully captures the diversity present in the ensemble without increasing the computational complexity. The proposed approach has been comprehensively evaluated on five databases containing rolled, plain, and latent fingerprints (NIST SD4, NIST SD14, NIST SD27, NIST SD302, and FVC2004 DB2A), and statistically significant improvements are consistently demonstrated in verification as well as closed-set and open-set identification settings. The proposed approach serves as a wrapper capable of improving the accuracy of any DNN-based recognition system.
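The sketch below shows one way such an ensemble-then-fuse design can be structured; the encoder architecture, input transformations, and concatenate-then-project fusion layer are stand-ins chosen for illustration, not DeepPrint's actual components.

```python
# Rough sketch of the ensemble idea: several encoders, each fed a different
# transformation of the fingerprint, produce embeddings that are fused into one vector.
import torch
import torch.nn as nn

class EnsembleEmbedder(nn.Module):
    def __init__(self, encoders, transforms, embed_dim=192, fused_dim=192):
        super().__init__()
        assert len(encoders) == len(transforms)
        self.encoders = nn.ModuleList(encoders)   # one encoder per input transformation
        self.transforms = transforms              # callables: image tensor -> image tensor
        self.fuse = nn.Linear(embed_dim * len(encoders), fused_dim)  # distill into one embedding

    def forward(self, x):
        embeddings = [enc(t(x)) for enc, t in zip(self.encoders, self.transforms)]
        fused = self.fuse(torch.cat(embeddings, dim=1))
        return nn.functional.normalize(fused, dim=1)  # unit-length embedding for cosine matching
```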
Given a full fingerprint image (rolled or slap), we present CycleGAN models to generate multiple latent impressions of the same identity as the full print. Our models can control the degree of distortion, noise, blurriness, and occlusion in the generated latent print images to obtain the good, bad, and ugly latent image categories introduced in the NIST SD27 latent database. The contributions of our work are twofold: (i) we demonstrate the similarity of synthetically generated latent fingerprint images to crime-scene latents in the NIST SD27 and MSP databases, as evaluated by the NIST NFIQ 2 quality measure and by ROC curves obtained from a SOTA fingerprint matcher, and (ii) we use the synthetic latents to augment the small latent training databases available in the public domain to improve the performance of DeepPrint, a SOTA fingerprint matcher designed for rolled fingerprint matching, on three latent databases (NIST SD27, NIST SD302, and IIITD-SLF). For example, with augmentation by synthetic latent data, the rank-1 retrieval performance of DeepPrint improves from 15.50% to 29.07% on the challenging NIST SD27 latent database. Our approach for generating synthetic latent fingerprints can be used to improve the recognition performance of any latent matcher as well as its individual components (e.g., enhancement, segmentation, and feature extraction).
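The CycleGAN itself is not reproduced here; the sketch below only illustrates the kind of controllable degradations (blur, noise, occlusion) the abstract refers to, applied to a rolled/slap impression to mimic increasing latent severity. All parameter values and the "good / bad / ugly" presets are arbitrary assumptions.

```python
# Parametric degradation sketch: severity knobs loosely mirror latent quality categories.
import numpy as np
import cv2

def degrade(fingerprint, blur_sigma=2.0, noise_std=10.0, occlusion_frac=0.2, seed=0):
    """fingerprint: uint8 grayscale image; returns a synthetically degraded copy."""
    rng = np.random.default_rng(seed)
    img = cv2.GaussianBlur(fingerprint, (0, 0), sigmaX=blur_sigma)
    img = np.clip(img.astype(np.float32) + rng.normal(0, noise_std, img.shape), 0, 255)
    h, w = img.shape
    oh, ow = int(h * occlusion_frac), int(w * occlusion_frac)   # rectangular occlusion
    y, x = rng.integers(0, h - oh), rng.integers(0, w - ow)
    img[y:y + oh, x:x + ow] = 255                               # blank out a patch
    return img.astype(np.uint8)

# severity presets (blur_sigma, noise_std, occlusion_frac), purely illustrative
presets = {"good": (1.0, 5.0, 0.05), "bad": (2.5, 15.0, 0.2), "ugly": (4.0, 30.0, 0.4)}
```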
Human gait is considered a unique biometric identifier that can be acquired covertly and at a distance. However, models trained on existing public-domain gait datasets, which are captured in controlled scenarios, suffer drastic performance degradation when applied to real-world unconstrained gait data. On the other hand, video person re-identification techniques have achieved promising performance on large-scale, publicly available datasets. Given the diversity of clothing characteristics, clothing cues are not reliable for person recognition, so it is actually unclear why state-of-the-art person re-identification methods work as well as they do. In this paper, we construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge, comprising 1,404 persons walking in an unconstrained manner. Based on this dataset, a consistent and comparative study between gait recognition and person re-identification can be carried out. Since our experimental results indicate that current gait recognition methods, designed on data collected in controlled scenarios, are inadequate for real surveillance scenarios, we propose a novel gait recognition method named RealGait. Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible, and that the underlying gait pattern may be the true reason why video person re-identification works in practice.
A major impediment for researchers working in the area of fingerprint recognition is the lack of publicly available, large-scale fingerprint datasets. The publicly available datasets that do exist contain only a few identities and impressions per finger. This limits research on many topics, including, for example, using deep networks to learn fixed-length fingerprint embeddings. Therefore, we propose PrintsGAN, a synthetic fingerprint generator capable of producing unique fingerprints along with multiple impressions of each fingerprint. Using PrintsGAN, we synthesize a database of 525,000 fingerprints (35,000 distinct fingers, each with 15 impressions). Next, we demonstrate the utility of the PrintsGAN-generated dataset by training a deep network to extract a fixed-length embedding from a fingerprint. In particular, an embedding model trained on our synthetic fingerprints and fine-tuned on 25,000 prints from NIST SD302 obtains a TAR of 87.03% on the NIST SD4 database (a boost from TAR = 73.37% when trained on NIST SD302 alone). Prevailing synthetic fingerprint generation methods either (i) lack realism or (ii) are unable to generate multiple impressions of the same finger. We plan to release our synthetic fingerprint database to the public.
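A brief sketch of the two-stage training regime the abstract describes — pre-train an embedding network on synthetic fingerprints, then fine-tune on a smaller set of real prints — is shown below. The model, loss, data loaders, and hyperparameters are placeholders, not the paper's configuration.

```python
# Sketch: pre-train on synthetic data, then fine-tune on real data with a lower learning rate.
import torch

def train(model, loader, epochs, lr, device="cuda"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()          # identity-classification proxy loss
    model.to(device).train()
    for _ in range(epochs):
        for images, identity_labels in loader:
            logits = model(images.to(device))
            loss = loss_fn(logits, identity_labels.to(device))
            opt.zero_grad(); loss.backward(); opt.step()
    return model

# model = train(model, synthetic_loader, epochs=50, lr=1e-3)  # pre-train on synthetic fingers
# model = train(model, real_loader,      epochs=10, lr=1e-4)  # fine-tune on real prints
```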
Matching contactless fingerprints, or finger photos, to contact-based fingerprint impressions has received increased attention in the wake of COVID-19 due to the superior hygiene of contactless acquisition and the widespread availability of low-cost mobile phones capable of capturing photos of fingerprints with sufficient resolution for verification purposes. This paper presents an end-to-end automated system, called C2CL, comprised of a mobile finger photo capture app, preprocessing, and matching algorithms to handle the challenges that have inhibited previous cross-matching methods; namely (i) the low ridge-valley contrast of contactless fingerprints, (ii) varying roll, pitch, yaw, and distance of the finger, (iii) non-linear distortion of contact-based fingerprints, and (iv) the varying image quality of smartphone cameras. Our preprocessing algorithm segments, enhances, scales, and unwarps contactless fingerprints, while our matching algorithm extracts minutiae and texture representations. A dataset of contactless 2D fingerprints acquired with our mobile capture app and corresponding contact-based fingerprints from 206 subjects (2 thumbs and 2 index fingers per subject) is used to evaluate the cross-database performance of our proposed algorithm. Furthermore, additional experimental results on 3 publicly available datasets demonstrate a significant improvement in the state of the art for contactless-to-contact fingerprint matching (TAR improved from 96.67% to 98.30% at FAR = 0.01%).
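The sketch below only captures the overall structure of such a pipeline (segment, enhance, scale, unwarp, then fuse minutiae and texture scores); each stage is a named placeholder, and the score-fusion weight is an assumption — the actual C2CL components are learned models not reproduced here.

```python
# Structural sketch of a contactless-to-contact matching pipeline.
def preprocess_contactless(photo, segment, enhance, rescale, unwarp):
    """photo: raw finger photo; the four callables stand in for the pipeline's stages."""
    finger = segment(photo)        # isolate the fingertip from the background
    finger = enhance(finger)       # boost the low ridge-valley contrast
    finger = rescale(finger)       # normalize scale toward contact-print resolution
    return unwarp(finger)          # compensate perspective / roll-pitch-yaw differences

def match(contactless, contact, minutiae_matcher, texture_matcher, w=0.5):
    # fuse minutiae-based and texture-based similarity scores
    return w * minutiae_matcher(contactless, contact) + (1 - w) * texture_matcher(contactless, contact)
```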
We present a method to search for a probe (or query) image representation against a large gallery in the encrypted domain. We require that the probe and gallery images be represented in terms of fixed-length representations, which are typical of representations obtained from learned networks. Our encryption scheme is agnostic to how the fixed-length representation is obtained and can therefore be applied to any fixed-length representation in any application domain. Our method, dubbed HERS (Homomorphically Encrypted Representation Search), operates by (i) compressing the representation towards its estimated intrinsic dimensionality with a minimal loss of accuracy, (ii) encrypting the compressed representation using the proposed fully homomorphic encryption scheme, and (iii) efficiently searching against a gallery of encrypted representations directly in the encrypted domain, without decrypting them. Numerical results on large face, fingerprint, and object datasets (e.g., ImageNet) show that, for the first time, accurate and fast image search within the encrypted domain is feasible at scale (500 seconds; a 275x speedup over the state of the art for encrypted search against a gallery of 100 million). Code is available at https://github.com/human-analysis/hers-encrypted-image-search.
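For intuition only, the sketch below mirrors the compress-then-search structure of steps (i) and (iii) in plaintext: features are projected to a lower dimensionality and scored by inner products, the operation that an FHE scheme would evaluate on ciphertexts. The PCA-style compression is an assumption for illustration, and the encryption step (ii) is deliberately omitted.

```python
# Plaintext sketch of the compress-then-search structure (encryption omitted).
import numpy as np

def fit_compressor(gallery_features, target_dim=64):
    mean = gallery_features.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery_features - mean, full_matrices=False)
    return mean, vt[:target_dim].T                      # projection to the compressed space

def search(probe, gallery, mean, proj, top_k=5):
    p = (probe - mean) @ proj
    g = (gallery - mean) @ proj
    p = p / np.linalg.norm(p)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    scores = g @ p                                      # inner products (FHE-friendly operation)
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]
```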
Speech systems are sensitive to accent variations. This is especially challenging in the Indian context, with an abundance of languages but a dearth of linguistic studies characterising pronunciation variations. The growing number of L2 English speakers in India reinforces the need to study accents and L1-L2 interactions. We investigate the accents of Indian English (IE) speakers and report in detail our observations, both specific and common to all regions. In particular, we observe the phonemic variations and phonotactics occurring in the speakers' native languages and apply this to their English pronunciations. We demonstrate the influence of 18 Indian languages on IE by comparing the native language pronunciations with IE pronunciations obtained jointly from existing literature studies and phonetically annotated speech of 80 speakers. Consequently, we are able to validate the intuitions of Indian language influences on IE pronunciations by justifying pronunciation rules from the perspective of Indian language phonology. We obtain a comprehensive description in terms of universal and region-specific characteristics of IE, which facilitates accent conversion and adaptation of existing ASR and TTS systems to different Indian accents.
Robots have been steadily increasing their presence in our daily lives, where they can work along with humans to provide assistance in various tasks on industry floors, in offices, and in homes. Automated assembly is one of the key applications of robots, and the next generation assembly systems could become much more efficient by creating collaborative human-robot systems. However, although collaborative robots have been around for decades, their application in truly collaborative systems has been limited. This is because a truly collaborative human-robot system needs to adjust its operation with respect to the uncertainty and imprecision in human actions, ensure safety during interaction, etc. In this paper, we present a system for human-robot collaborative assembly using learning from demonstration and pose estimation, so that the robot can adapt to the uncertainty caused by the operation of humans. Learning from demonstration is used to generate motion trajectories for the robot based on the pose estimate of different goal locations from a deep learning-based vision system. The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario. We show successful generalization of the system's operation to changes in the initial and final goal locations through various experiments.
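As a toy illustration of the adaptation idea in the abstract, the sketch below retargets a demonstrated end-effector trajectory to a new goal position supplied by a pose estimator. A real system would use a proper learning-from-demonstration model (e.g., dynamic movement primitives) and full 6-DoF poses; the linear warp here is purely an illustrative assumption.

```python
# Retarget a demonstrated trajectory to a new estimated goal position.
import numpy as np

def retarget_trajectory(demo_waypoints, new_goal):
    """demo_waypoints: (T, 3) positions from a demonstration; new_goal: (3,) estimated goal."""
    demo = np.asarray(demo_waypoints, dtype=float)
    old_goal = demo[-1]
    # progress along the demonstration: 0 at the start, 1 at the goal
    progress = np.linspace(0.0, 1.0, len(demo))[:, None]
    # shift each waypoint by a fraction of the goal displacement, preserving the demo's shape
    return demo + progress * (np.asarray(new_goal, dtype=float) - old_goal)
```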